# Multi-dataset Adaptation

## Reranker MiniLM L6 H384 Uncased Gooaq 5 Epoch 1995000
- **License:** Apache-2.0
- **Description:** A cross-encoder fine-tuned from nreimers/MiniLM-L6-H384-uncased that computes relevance scores for text pairs, suited to text re-ranking and semantic search.
- **Tags:** Text Embedding · English
- **Author:** ayushexel · **Downloads:** 24 · **Likes:** 0
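A cross-encoder reranker scores each (query, passage) pair and the passages are then sorted by that score. A minimal sketch of the re-ranking step, using a toy token-overlap scorer as a stand-in for the real model (in practice the scores would come from sentence-transformers' `CrossEncoder.predict`):

```python
def rerank(query, passages, score):
    """Sort passages by relevance score for the query, highest first."""
    scored = [(score(query, p), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored]

def overlap_score(query, passage):
    """Toy stand-in scorer: shared lowercase tokens between query and passage.
    A cross-encoder would replace this with a learned relevance score."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

ranked = rerank(
    "how do plants make food",
    ["Stock markets closed higher today.",
     "Photosynthesis lets plants make food from light."],
    overlap_score,
)
# The photosynthesis passage ranks first: it shares more tokens with the query.
```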
## Bart Large Cnn Finetuned For Email And Text
- **License:** MIT
- **Description:** A pre-trained model based on the BART architecture, designed for text summarization and fine-tuned for emails and general text.
- **Tags:** Text Generation · English
- **Author:** vapit · **Downloads:** 17 · **Likes:** 0
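BART's encoder accepts a bounded input (1024 tokens for BART Large CNN), so long emails or documents are commonly split into overlapping chunks that are summarized separately and then stitched together. A minimal chunking sketch; the 200-word window is an illustrative stand-in for the real tokenizer-based limit:

```python
def chunk_words(text, max_words=200, overlap=20):
    """Split text into overlapping word windows so each chunk fits the
    model's input limit. Overlap preserves context across boundaries."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window already covers the tail of the text
    return chunks
```

Each chunk would then be passed to the summarizer (e.g. a Hugging Face `pipeline("summarization", ...)`) and the partial summaries concatenated or summarized again.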
## Nuner V1 Orgs
- **Description:** A model fine-tuned from numind/NuNER-v1.0 on the FewNERD-fine-supervised dataset for recognizing organization (ORG) entities in text.
- **Tags:** Sequence Labeling · Transformers · Supports Multiple Languages
- **Author:** guishe · **Downloads:** 6,836 · **Likes:** 2
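Token classifiers like this one emit a per-token BIO tag (`B-ORG` begins an entity, `I-ORG` continues it, `O` is outside), which must be merged into entity spans before use. A minimal aggregation sketch; the tokens and tags below are made up for illustration:

```python
def merge_bio(tokens, tags):
    """Collapse parallel BIO tags into (entity_text, label) spans."""
    entities, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:  # close the previous entity before starting a new one
                entities.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)  # continuation of the open entity
        else:
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

spans = merge_bio(
    ["Tim", "joined", "Open", "AI", "Research", "."],
    ["O",   "O",      "B-ORG", "I-ORG", "I-ORG", "O"],
)
# spans == [("Open AI Research", "ORG")]
```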
## Reward Model Deberta V3 Base
- **License:** MIT
- **Description:** A reward model trained on human feedback that predicts which of several candidate answers a human would prefer.
- **Tags:** Large Language Model · Transformers · English
- **Author:** OpenAssistant · **Downloads:** 1,193 · **Likes:** 11
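A reward model of this kind assigns a scalar score to each candidate answer. Under the Bradley–Terry formulation commonly used to train such models, the probability that answer A is preferred over answer B is the sigmoid of the reward difference. A minimal sketch, assuming the rewards are the model's scalar outputs:

```python
import math

def preference_prob(reward_a, reward_b):
    """Bradley-Terry probability that answer A is preferred over B,
    given scalar rewards such as a reward model's outputs."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

# Equal rewards give a 50/50 preference; a 2-point gap gives ~0.88.
```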
## Stt De Conformer Transducer Large
- **Description:** A large Conformer-Transducer model for German automatic speech recognition, with roughly 120 million parameters, that transcribes German speech to text.
- **Tags:** Speech Recognition · German
- **Author:** nvidia · **Downloads:** 66 · **Likes:** 6
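ASR models like this one are typically evaluated with word error rate (WER), the word-level edit distance between reference and hypothesis transcripts, normalized by reference length. A self-contained sketch of the metric:

```python
def wer(reference, hypothesis):
    """Word error rate: Levenshtein distance over words, divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One dropped word out of four reference words gives a WER of 0.25.
```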
## Distill Pegasus Cnn 16 4
- **Description:** A distilled PEGASUS checkpoint; PEGASUS is an abstractive summarization model pre-trained with gap-sentence generation, developed by Google Research.
- **Tags:** Text Generation · Transformers · English
- **Author:** sshleifer · **Downloads:** 286 · **Likes:** 4
## Bp Lapsbm1 Xlsr
- **License:** Apache-2.0
- **Description:** A Brazilian Portuguese Wav2vec 2.0 speech recognition model fine-tuned on the LaPS BM dataset.
- **Tags:** Speech Recognition · Transformers · Other
- **Author:** lgris · **Downloads:** 20 · **Likes:** 0
## Distilbart Xsum 12 6
- **License:** Apache-2.0
- **Description:** DistilBART is a distilled version of BART for text summarization, substantially reducing model size and inference time while retaining most of the original model's performance.
- **Tags:** Text Generation · English
- **Author:** sshleifer · **Downloads:** 1,446 · **Likes:** 6
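Distillation trains the smaller student model to match the teacher's output distribution. A minimal sketch of one common objective, the KL divergence between temperature-softened softmax distributions over the two models' logits (illustrative only; the actual DistilBART recipe also copies teacher layers into the student and fine-tunes on the task):

```python
import math

def softmax(logits, T=1.0):
    """Softmax over raw logits, softened by temperature T."""
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Zero when the student already matches the teacher exactly."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```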
## Pegasus Arxiv
- **Description:** PEGASUS is an abstractive summarization model pre-trained with gap-sentence generation; this checkpoint uses the "mixed & stochastic" training recipe and targets arXiv paper summarization.
- **Tags:** Text Generation · Transformers · English
- **Author:** google · **Downloads:** 333 · **Likes:** 2
## Bp Commonvoice10 Xlsr
- **License:** Apache-2.0
- **Description:** A Wav2vec 2.0 model fine-tuned for Brazilian Portuguese speech recognition on the Common Voice 7.0 dataset.
- **Tags:** Speech Recognition · Transformers · Other
- **Author:** lgris · **Downloads:** 25 · **Likes:** 0
## Distilbart Xsum 6 6
- **License:** Apache-2.0
- **Description:** DistilBART is a distilled version of BART for text summarization, substantially reducing model size and inference time while retaining most of the original model's performance.
- **Tags:** Text Generation · English
- **Author:** sshleifer · **Downloads:** 147 · **Likes:** 0
## Bp Sid10 Xlsr
- **License:** Apache-2.0
- **Description:** A Wav2vec 2.0 model fine-tuned for Brazilian Portuguese on the Sidney dataset, suited to Brazilian Portuguese automatic speech recognition.
- **Tags:** Speech Recognition · Transformers · Other
- **Author:** lgris · **Downloads:** 21 · **Likes:** 0
© 2025 AIbase